143 research outputs found

    Successive Coordinate Search and Component-by-Component Construction of Rank-1 Lattice Rules

    The (fast) component-by-component (CBC) algorithm is an efficient tool for the construction of generating vectors for quasi-Monte Carlo rank-1 lattice rules in weighted reproducing kernel Hilbert spaces. We consider product weights, which assign a weight to each dimension: each weight encodes the importance of a variable (or, via the product of the individual weights, of a group of variables), with smaller weights indicating less importance. Kuo (2003) proved that the CBC algorithm achieves the optimal rate of convergence in the respective function spaces, but this does not imply that the algorithm finds the generating vector with the smallest worst-case error; in fact it does not. We investigate a generalization of the component-by-component construction that allows for a general successive coordinate search (SCS), based on an initial generating vector, with the aim of getting closer to the smallest worst-case error. The proposed method admits the same type of worst-case error bounds as the CBC algorithm, independent of the choice of the initial vector. Under the same summability conditions on the weights as in Kuo (2003), the error bound of the algorithm can be made independent of the dimension d, and we achieve the same optimal order of convergence for the function spaces from Kuo (2003). Moreover, a fast version of our method, based on the fast CBC algorithm by Nuyens and Cools, is available, reducing the computational cost of the algorithm to O(d n log(n)) operations, where n denotes the number of function evaluations. Numerical experiments seeded by a Korobov-type generating vector show that the new SCS algorithm finds better generating vectors than the CBC algorithm, and the improvement is larger when the weights decay more slowly. Comment: 13 pages, 1 figure, MCQMC2016 conference (Stanford).
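
    To make the CBC principle concrete, the following is a minimal, non-fast sketch of a component-by-component search. It assumes the weighted Korobov space with smoothness 2 and product weights (a standard setting consistent with the abstract), a prime number of points n, and illustrative weights gamma_j = 1/j^2; the SCS variant described above would additionally seed the search with an initial generating vector, which is not shown here.

        import numpy as np

        def omega(x):
            # kernel term of the weighted Korobov space with smoothness alpha = 2:
            # 2*pi^2 * B_2({x}), with B_2 the second Bernoulli polynomial
            x = np.asarray(x, dtype=float) % 1.0
            return 2.0 * np.pi**2 * (x * x - x + 1.0 / 6.0)

        def cbc(n, gammas):
            """Plain O(d n^2) component-by-component construction, n prime."""
            k = np.arange(n)
            prod = np.ones(n)      # running product over the chosen coordinates, per point k
            z = []
            for gamma in gammas:   # one product weight gamma_j per dimension
                best_z, best_e2, best_col = None, np.inf, None
                for cand in range(1, n):   # n prime, so every candidate is admissible
                    col = 1.0 + gamma * omega(k * cand / n)
                    e2 = -1.0 + np.mean(prod * col)   # squared worst-case error
                    if e2 < best_e2:
                        best_z, best_e2, best_col = cand, e2, col
                z.append(best_z)
                prod *= best_col
            return z, np.sqrt(best_e2)

        # illustrative run: 127 points, 8 dimensions, weights gamma_j = 1/j^2
        z, err = cbc(127, [1.0 / j**2 for j in range(1, 9)])
        print(z, err)

    The fast variants mentioned in the abstract replace the O(n^2) inner double loop by an FFT over a reordering of the candidates, which is what brings the cost down to O(d n log(n)).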

    Conditional sampling for barrier option pricing under the LT method

    We develop a conditional sampling scheme for pricing knock-out barrier options under the Linear Transformations (LT) algorithm from Imai and Tan (2006). We compare our new method to an existing conditional Monte Carlo scheme from Glasserman and Staum (2001) and show that a substantial variance reduction is achieved. We extend the method to allow pricing knock-in barrier options and introduce a root-finding method to obtain a further variance reduction. The effectiveness of the new method is supported by numerical results.
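
    For orientation, the baseline the abstract compares against can be sketched as a one-step-survival conditional Monte Carlo estimator in the spirit of Glasserman and Staum (2001), here for a discretely monitored down-and-out call under geometric Brownian motion. This is not the LT-based scheme of the paper, and the model, payoff, and parameter values are illustrative assumptions only.

        import numpy as np
        from scipy.stats import norm

        def down_and_out_call_cmc(S0, K, B, r, sigma, T, steps, n_paths, rng=None):
            """One-step-survival conditional Monte Carlo for a discretely
            monitored down-and-out call under geometric Brownian motion."""
            rng = rng or np.random.default_rng(0)
            dt = T / steps
            drift = (r - 0.5 * sigma**2) * dt
            vol = sigma * np.sqrt(dt)
            S = np.full(n_paths, float(S0))
            survival = np.ones(n_paths)
            for _ in range(steps):
                # probability that the next step stays above the barrier, given S
                z_barrier = (np.log(B / S) - drift) / vol
                p_survive = 1.0 - norm.cdf(z_barrier)
                survival *= p_survive
                # draw the increment conditional on not crossing the barrier
                u = rng.random(n_paths)
                z = norm.ppf(norm.cdf(z_barrier) + u * p_survive)
                S = S * np.exp(drift + vol * z)
            payoff = np.exp(-r * T) * survival * np.maximum(S - K, 0.0)
            return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

        # illustrative parameters
        price, stderr = down_and_out_call_cmc(S0=100, K=100, B=90, r=0.03,
                                              sigma=0.2, T=1.0, steps=12, n_paths=20000)
        print(price, stderr)

    At each monitoring date the knock-out probability is integrated out analytically and the next value is drawn from the truncated distribution, so no path is wasted on knocked-out scenarios; the paper's contribution, per the abstract, is a conditional scheme tailored to the LT construction together with a root-finding refinement.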

    Efficient calculation of the worst-case error and (fast) component-by-component construction of higher order polynomial lattice rules

    We show how to obtain a fast component-by-component construction algorithm for higher order polynomial lattice rules. Such rules are useful for multivariate quadrature of high-dimensional smooth functions over the unit cube, as they achieve a near-optimal order of convergence. The main problem addressed in this paper is to find an efficient way of computing the worst-case error. A general algorithm is presented and explicit expressions for base 2 are given. To obtain an efficient component-by-component construction algorithm we exploit the structure of the underlying cyclic group. We compare our new higher order multivariate quadrature rules to existing quadrature rules based on higher order digital nets by computing their worst-case errors. These numerical results show that the higher order polynomial lattice rules improve upon the known constructions of quasi-Monte Carlo rules based on higher order digital nets.
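
    The cyclic-group idea can be illustrated on the simpler case of ordinary rank-1 lattice rules with a prime number of points n: reordering the candidate components by powers of a primitive root modulo n turns the per-dimension error update into a circular convolution, which an FFT evaluates in O(n log n). The sketch below shows this trick for the weighted Korobov space with smoothness 2 and product weights; it is an analogue for illustration only, not the base-2 polynomial lattice construction of the paper, which exploits the corresponding multiplicative structure of polynomials over a finite field instead.

        import numpy as np

        def omega(x):
            # kernel term of the weighted Korobov space with smoothness alpha = 2
            x = np.asarray(x, dtype=float) % 1.0
            return 2.0 * np.pi**2 * (x * x - x + 1.0 / 6.0)

        def primitive_root(n):
            # smallest primitive root modulo the prime n (brute force)
            factors, m, q = set(), n - 1, 2
            while q * q <= m:
                while m % q == 0:
                    factors.add(q)
                    m //= q
                q += 1
            if m > 1:
                factors.add(m)
            for g in range(2, n):
                if all(pow(g, (n - 1) // f, n) != 1 for f in factors):
                    return g

        def fast_cbc(n, gammas):
            """O(d n log n) component-by-component construction, n prime."""
            m = n - 1
            g = primitive_root(n)
            perm = np.array([pow(g, i, n) for i in range(m)])   # 1..n-1 in cyclic order
            fft_c = np.fft.rfft(omega(perm / n))                # kernel values, reordered
            p = np.ones(n)          # running product per point index k = 0..n-1
            z, e2 = [], 0.0
            for gamma in gammas:
                a = p[perm[(-np.arange(m)) % m]]                # p evaluated at g^(-j) mod n, j = 0..m-1
                conv = np.fft.irfft(fft_c * np.fft.rfft(a), m)  # circular convolution
                # conv[i] = sum over k = 1..n-1 of p(k) * omega({k * g^i / n})
                e2_cand = e2 + gamma / n * (conv + p[0] * omega(0.0))
                i_best = int(np.argmin(e2_cand))
                z_best = int(perm[i_best])
                z.append(z_best)
                e2 = float(e2_cand[i_best])
                p *= 1.0 + gamma * omega(np.arange(n) * z_best / n)
            return z, np.sqrt(e2)

        # illustrative run: same inputs as a naive CBC search, but O(d n log n) cost
        z, err = fast_cbc(127, [1.0 / j**2 for j in range(1, 9)])
        print(z, err)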

    Hot new directions for quasi-Monte Carlo research in step with applications

    This article provides an overview of some interfaces between the theory of quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC theoretical settings: first order QMC methods in the unit cube [0,1]^s and in R^s, and higher order QMC methods in the unit cube. One important feature is that their error bounds can be independent of the dimension s under appropriate conditions on the function spaces. Another important feature is that good parameters for these QMC methods can be obtained by fast and efficient algorithms even when s is large. We outline three different applications and explain how they can tap into the different QMC theoretical settings. We also discuss three cost-saving strategies that can be combined with QMC in these applications. Much of this recent QMC theory and many of these methods were developed not in isolation, but in close connection with applications.

    Application of quasi-Monte Carlo methods to PDEs with random coefficients -- an overview and tutorial

    This article provides a high-level overview of some recent works on the application of quasi-Monte Carlo (QMC) methods to PDEs with random coefficients. It is based on an in-depth survey with a similar title by the same authors, with an accompanying software package which is also briefly discussed here. Embedded in this article is a step-by-step tutorial of the required analysis for the setting known as the uniform case with first order QMC rules. The aim of this article is to provide an easy entry point for QMC experts wanting to start research in this direction and for PDE analysts and practitioners wanting to tap into contemporary QMC theory and methods. Comment: arXiv admin note: text overlap with arXiv:1606.0661
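
    The flavour of the uniform case can be conveyed by a toy computation: a 1D diffusion equation whose coefficient depends affinely on uniformly distributed parameters, with the parametric expectation of a linear functional of the solution approximated by a randomly shifted rank-1 lattice rule. Everything below (the coefficient expansion, the finite-difference solver, the quantity of interest, and in particular the Korobov-style generating vector with multiplier 76) is an illustrative assumption, not the model problem or the constructed generating vectors of the paper or its accompanying software package.

        import numpy as np

        def solve_pde(y, M=64):
            # toy 1D problem: -(a(x, y) u'(x))' = 1 on (0, 1), u(0) = u(1) = 0,
            # with a(x, y) = 2 + sum_j y_j * j^{-2} * sin(j*pi*x), y_j in [-1/2, 1/2]
            x = np.linspace(0.0, 1.0, M + 1)
            xm = 0.5 * (x[:-1] + x[1:])                     # cell midpoints
            j = np.arange(1, len(y) + 1)
            a = 2.0 + np.sin(np.pi * np.outer(xm, j)) @ (y / j**2)
            h = 1.0 / M
            # standard three-point finite-difference stiffness matrix
            main = (a[:-1] + a[1:]) / h**2
            off = -a[1:-1] / h**2
            A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            u = np.linalg.solve(A, np.ones(M - 1))
            return h * np.sum(u)                            # quantity of interest: integral of u

        def qmc_expectation(d=16, n=257, shifts=8, rng=None):
            # randomly shifted rank-1 lattice rule; the Korobov-style vector with
            # multiplier 76 is a placeholder, not an optimized generating vector
            rng = rng or np.random.default_rng(0)
            z = np.array([pow(76, j, n) for j in range(d)])
            k = np.arange(n)[:, None]
            estimates = []
            for _ in range(shifts):
                shift = rng.random(d)
                pts = ((k * z / n + shift) % 1.0) - 0.5     # map points to [-1/2, 1/2]^d
                estimates.append(np.mean([solve_pde(y) for y in pts]))
            return np.mean(estimates), np.std(estimates) / np.sqrt(shifts)

        mean, stderr = qmc_expectation()
        print(mean, stderr)

    Averaging over a small number of independent random shifts gives both an unbiased estimate and a practical error indicator, which is the usual way first order QMC rules are deployed in this setting.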

    Lysolecithin, but not lecithin, improves nutrient digestibility and growth rates in young broilers

    Young broilers have an underdeveloped capacity for lipid digestion. The potential of lecithin and lysolecithin to improve lipid digestion and growth performance was investigated in 3 experiments: an in vitro model that mimics the intestinal conditions of the chick, a digestibility trial with chicks (5 to 7 days of age), and a performance trial until 21 days of age. In Experiment 1, palm oil (PO), palm oil with lecithin (PO+L), and palm oil with lysolecithin (PO+LY) were subjected to in vitro hydrolysis and applied to Caco-2 monolayers to assess lipid absorption. The in vitro hydrolysis rate of triglycerides was higher in PO+LY (k = 11.76 × 10⁻³ min⁻¹) than in either PO (k = 9.73 × 10⁻³ min⁻¹) or PO+L (k = 8.41 × 10⁻³ min⁻¹), and the absorption of monoglycerides and free fatty acids was highest (P < 0.01) for PO+LY. In Experiment 2, 90 broilers were assigned to three dietary treatments: a basal diet with 4% palm oil, and the basal diet supplemented with either 250 ppm lecithin or lysolecithin. The apparent total tract digestibility (ATTD) of crude fat was higher in broilers supplemented with lysolecithin, but lower in broilers supplemented with lecithin. Dry matter (DM) digestibility and AMEn in birds supplemented with lysolecithin were significantly higher (by 3.03% and 0.47 MJ/kg, respectively). In Experiment 3, 480 broilers were randomly allocated to four dietary treatments: a basal diet with soybean oil (2%), a basal diet with lecithin (2%), the soybean oil diet with 250 ppm lysolecithin, or the lecithin diet with 250 ppm lysolecithin. The lecithin diets significantly reduced body weight at days 10 and 21 compared with soybean oil. However, the addition of lysolecithin to lecithin-containing diets significantly improved bird performance. The results of these studies show that, in contrast to lecithin, lysolecithin is able to significantly improve the digestibility and energy values of feed in young broilers.
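
    Assuming the reported k values are first-order hydrolysis rate constants (as their units of min⁻¹ suggest), they can be translated into hydrolysis half-lives for a rough comparison; this is an illustrative back-of-the-envelope calculation, not a result reported in the study.

        t_{1/2} = \frac{\ln 2}{k}
        t_{1/2}(\mathrm{PO{+}LY}) \approx 0.693 / (11.76 \times 10^{-3}\,\mathrm{min}^{-1}) \approx 59~\mathrm{min}
        t_{1/2}(\mathrm{PO})      \approx 0.693 / (9.73 \times 10^{-3}\,\mathrm{min}^{-1}) \approx 71~\mathrm{min}
        t_{1/2}(\mathrm{PO{+}L})  \approx 0.693 / (8.41 \times 10^{-3}\,\mathrm{min}^{-1}) \approx 82~\mathrm{min}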

    The experimental analysis of problematic video gaming and cognitive skills: a systematic review

    There is now a growing literature demonstrating that excessive gaming can have detrimental effects on a small minority of gamers. This has led to much debate in the psychological literature on both the positive and negative effects of gaming. One specific area that has been investigated is the effect of gaming on different types of cognitive skills. The present study carried out a systematic review of the studies that have examined the impact of problematic gaming on cognitive skills. Following a number of inclusion and exclusion criteria, a total of 18 studies were identified that had investigated three specific cognitive skills: (i) multi-second time perception (4 studies), (ii) inhibition (7 studies), and (iii) decision-making (7 studies). Based on the studies reviewed, the findings demonstrate that pathological and/or excessive use of videogames can have negative consequences for cognitive processes.

    Background and aims: Video gaming has become one of the major worldwide leisure activities, with millions of people playing every day. Following this success, video games have evolved considerably, multiplying genres (e.g., MMORPG, MOBA, FPS), with some of these games demanding a large investment from players. This investment can become excessive, even pathological, and numerous studies have explored this risk, leading to the inclusion of Internet Gaming Disorder in the appendix of the DSM-5 (American Psychiatric Association, 2013). However, despite the risks underlying addiction (e.g., loss of relationships, academic difficulties), it has also been shown that video gaming can significantly improve players' performance (e.g., performance on a surgery simulator, Fanning, Fenton, Johnson, Johnson, & Rehman, 2011; better visual search, Sims & Mayer, 2002). Furthermore, playing these games has been shown to affect players' cognitive abilities (e.g., Durlach, Kring, & Bowens, 2009). A systematic review of the impact of pathological/excessive use on these abilities was therefore conducted. Method: The article search was conducted on four databases (Google Scholar, PubMed, Science Direct, PsychINFO). To be included in this review, peer-reviewed articles had to: (i) date from 2000 or later (video games having evolved considerably since then), (ii) include at least one experimental study of players' cognitive processes, (iii) include excessive/pathological gamers, (iv) be published in English, and (v) not have been covered in a previous literature review (e.g., fMRI studies). After article selection and removal of duplicates, the search yielded 18 results in 3 different sections (i.e., multi-second time perception, inhibition, and decision-making). Results: The experiments on time perception show heterogeneous results, with some studies showing no effect (e.g., Rivero, Covre, Reyes, & Bueno, 2012), others partial results (Rau, Peng, & Yang, 2006), and others significant results (Tobin & Grondin, 2009). However, the studies showing (potentially) significant results included pathological users, unlike the studies without significant results, and this difference in population may explain the discrepancy. The studies on inhibition show the same kind of heterogeneous results; however, once these studies are classified by type of inhibition, it appears that gamers show reduced prepotent response inhibition (e.g., in Go/NoGo tasks, Littel et al., 2012), which is aggravated when game-related stimuli are included in the task (e.g., Liu et al., 2014). However, the only study that explored the cancellation of a prepotent response did not find reduced inhibition, with action video game players showing shorter reaction times (Colzato, van den Wildenberg, Zmigrod, & Hommel, 2013). Finally, the studies on decision-making show similar results across studies, namely deficits in decision-making under risk (e.g., Pawlikowski & Brand, 2011), intact decision-making in tasks with ambiguous contexts (e.g., Nuyens et al., 2016), and a tendency to prefer a smaller immediate reward over a larger reward after a variable delay (Weinstein, Abu, Timor, & Mama, 2016). Discussion: Despite the various conflicting results and the small number of studies on certain processes, it is clear that pathological video game use can lead to cognitive difficulties. However, given that the performance studies mentioned above recruited only healthy participants, it is plausible that normal use would lead to improved performance without any negative counterpart. More studies are therefore needed to determine the differential impact of video games on cognitive processes depending on the degree and type of use (i.e., occasional, frequent, or pathological use).
    • 

    corecore